

Reports of the Workshops Held at the 2022 AAAI Conference on Human Computation and Crowdsourcing

Interactive AI Magazine

The 10th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2022) will be held November 6-10 as a virtual conference. HCOMP is the premier venue for disseminating the latest research findings on human computation and crowdsourcing. While artificial intelligence (AI) and human-computer interaction (HCI) represent traditional mainstays of the conference, HCOMP believes strongly in fostering and promoting broad, interdisciplinary research. Our field is unique in the diversity of disciplines it draws upon and contributes to, including human-centered qualitative studies and HCI design, social computing, artificial intelligence, economics, computational social science, digital humanities, policy, and ethics. We promote the exchange of advances in human computation and crowdsourcing not only among researchers but also engineers and practitioners, to encourage dialogue across disciplines and communities of practice.


In Search of Ambiguity: A Three-Stage Workflow Design to Clarify Annotation Guidelines for Crowd Workers

Pradhan, Vivek Krishna, Schaekermann, Mike, Lease, Matthew

arXiv.org Artificial Intelligence

While crowdsourcing now enables labeled data to be obtained more quickly, cheaply, and easily than ever before (Snow et al., 2008; Alonso, 2015; Sorokin and Forsyth, 2008), ensuring data quality remains something of an art, challenge, and perpetual risk. Consider a typical workflow for annotating data on Amazon Mechanical Turk (MTurk): a requester designs an annotation task, asks multiple workers to complete it, and then post-processes labels to induce final consensus labels. Because the annotation work itself is largely opaque, with only submitted labels being observable, the requester typically has little insight into what, if any, problems workers encounter during annotation. While statistical aggregation (Sheshadri and Lease, 2013; Hung et al., 2013; Zheng et al., 2017) and multi-pass iterative refinement (Little et al., 2010a; Goto et al., 2016) methods can be employed to further improve initial labels, there are limits to what can be achieved by post-hoc refinement following label collection. If initial labels are poor because many workers were confused by incomplete, unclear, or ambiguous task instructions, there is a significant risk of "garbage in equals garbage out" (Vidgen and Derczynski, 2020). In contrast, consider a more traditional annotation workflow involving trusted annotators, such as practiced by the Linguistic Data Consortium (LDC) (Griffitt and Strassel, 2016).
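The simplest instance of the statistical aggregation step the abstract describes is majority voting over redundant labels. The sketch below is our own minimal illustration (the function name and sample data are invented, not from the paper), not the specific aggregation methods it cites:

```python
from collections import Counter

def majority_vote(labels_by_item):
    """Aggregate redundant crowd labels into consensus labels.

    labels_by_item: dict mapping item id -> list of worker labels.
    Returns dict mapping item id -> most frequent label (ties broken
    by first-seen order, as collections.Counter does).
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_by_item.items()}

# Three workers label two items; one worker disagrees on item "a"
raw = {"a": ["cat", "dog", "cat"], "b": ["dog", "dog", "dog"]}
consensus = majority_vote(raw)  # {"a": "cat", "b": "dog"}
```

As the abstract notes, no such post-hoc scheme can recover good labels if ambiguous instructions confused most workers in the same way: the majority itself is then wrong.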


Human Computation for Image and Video Analysis

Luther, Kurt (Virginia Tech)

AI Magazine

This was the second meeting of the GroupSight workshop to be held at the AAAI Conference on Human Computation and Crowdsourcing (HCOMP). It was also the first time the workshop and conference were colocated with the ACM Conference on User Interface Software and Technology. The workshop was held in Quebec City, Quebec, Canada, on October 24, 2017. It featured two keynote speakers in human-computer interaction (HCI) doing research on crowdsourced image analysis.


AAAI Conferences Calendar

Editor, Managing (AAAI)

AI Magazine

This page includes forthcoming AAAI sponsored conferences, conferences presented by AAAI Affiliates, and conferences held in cooperation with AAAI. AI Magazine also maintains a calendar listing that includes nonaffiliated conferences at www.aaai.org/Magazine/calendar.php. The AAAI 2019 Spring Symposium Series will be held 25-27 March 2019. ICWSM-19 will be held 11-14 June 2019 in Munich, Germany. ICAPS-19 will be held 11-15 July 2019 in Berkeley, California, USA. The Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2019) will be held 28-30 October 2019 at the Skamania Lodge in Stevenson, Washington, USA.


Report on the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018)

Chen, Yiling (Harvard University) | Kazai, Gabriella (Lumi, Semion Ltd)

AI Magazine

Breaking with a number of traditions set in America, HCOMP 2018 returned to Europe, where the very first HCOMP workshop had taken place in 2009. We fostered new connections among interdisciplinary communities: collective intelligence, crowdsourcing, and human computation scholars and practitioners across diverse fields including human-computer interaction (HCI), artificial intelligence, economics, business, and design. HCOMP was started by researchers from diverse fields who wanted a high-quality scholarly venue for the review and presentation of the highest quality work, following previous AAAI HCOMP conferences (and four HCOMP workshops before that) to promote the most rigorous and exciting scholarship in this fast-emerging, multidisciplinary area. Besmira Nushi, Ece Kamar, and Eric Horvitz were also singled out with an honorable mention for their paper "Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure." Finally, Vikram Mohanty, David Thames, and Kurt Luther's presentation, "Are 1,000 Features Worth A Picture? Combining Crowdsourcing and Face Recognition to Identify Civil War Soldiers," was given the Best Poster / Demo Presentation award. For this, we invited submissions to a Works-in-Progress (WIP) and Demonstrations track, co-organized by Alessandro Bozzon (Delft University of Technology) and Matteo …


Reports of the Workshops Held at the Sixth AAAI Conference on Human Computation and Crowdsourcing

Aroyo, Lora (Vrije Universiteit Amsterdam) | Dumitrache, Anca (Vrije Universiteit Amsterdam) | Nickerson, Jeffrey V. (Stevens Institute of Technology) | Lease, Matthew (University of Texas at Austin) | Michelucci, Pietro (Cornell University)

AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence’s Sixth AAAI Conference on Human Computation and Crowdsourcing was held on the campus of the University of Zurich in Zurich, Switzerland, on 5 July 2018. The program comprised three full-day workshops: CrowdBias: Disentangling the Relation between Crowdsourcing and Bias Management; Subjectivity, Ambiguity, and Disagreement in Crowdsourcing; and Work in the Age of Intelligent Machines; a three-quarter-day workshop, Advancing Human Computation with Complexity Science; and a quarter-day Project Networking workshop. This report contains summaries of three of the events.


Flexible Reward Plans to Elicit Truthful Predictions in Crowdsourcing

Sakurai, Yuko (Kyushu University) | Oyama, Satoshi (Hokkaido University) | Shinoda, Masato (Nara Women's University) | Yokoo, Makoto (Kyushu University)

AAAI Conferences

We develop a flexible reward plan to elicit truthful predictive probability distributions over a set of uncertain events from workers. In our reward plan, the principal can assign rewards for incorrect predictions according to the similarity between events. In the spherical proper scoring rule, a worker's expected utility is represented as the inner product of her truthful predictive probability and her declared probability. We generalize the inner product by introducing a reward matrix that defines a reward for each prediction-outcome pair. We show that if the reward matrix is symmetric and positive definite, the spherical proper scoring rule guarantees the maximization of a worker's expected utility when she truthfully declares her prediction.
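The truthfulness claim can be checked numerically. In the toy sketch below (our own setup: the function name, the particular reward matrix, and the belief vector are all illustrative, not from the paper), the worker's expected utility is the A-weighted inner product of true belief p and declared distribution q, normalized by q's A-norm. By the Cauchy-Schwarz inequality in the inner product induced by a symmetric positive-definite A, this is maximized exactly when q is proportional to p, so truthful reporting is optimal:

```python
import numpy as np

def expected_utility(p, q, A):
    # Expected utility of a worker with true belief p who declares q,
    # under a matrix-generalized spherical rule: the A-inner product
    # of p and q, normalized by the A-norm of q.
    return (p @ A @ q) / np.sqrt(q @ A @ q)

rng = np.random.default_rng(0)
n = 3
# An illustrative symmetric positive-definite reward matrix; off-diagonal
# entries pay partial credit for "similar" (near-miss) outcomes.
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)

p = np.array([0.5, 0.3, 0.2])  # the worker's true predictive distribution

# Truthfulness check: no alternative declaration beats reporting p itself
truthful = expected_utility(p, p, A)
for _ in range(5000):
    q = rng.dirichlet(np.ones(n))
    assert expected_utility(p, q, A) <= truthful + 1e-9
```

With A equal to the identity matrix this reduces to the classical spherical scoring rule, where the expected utility is the plain inner product divided by the Euclidean norm of q.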


How Effective an Odd Message Can Be: Appropriate and Inappropriate Topics in Speech-Based Vehicle Interfaces

Sirkin, David (Stanford University) | Fischer, Kerstin (Southern Denmark University) | Jensen, Lars (Southern Denmark University) | Ju, Wendy (Stanford University and California College of the Arts)

AAAI Conferences

Dialog between drivers and speech-based vehicle interfaces can be used as an instrument to find out what drivers might be concerned, confused, or curious about in driving simulator studies. Eliciting ongoing conversation with drivers about topics that go beyond navigation, control of entertainment systems, or other traditional driving-related tasks is important to getting drivers to engage with the activity in an open-ended fashion. In a structured improvisational Wizard of Oz study that took place in a highly immersive driving simulator, we engaged participant drivers (N=6) in an autonomous driving course where the vehicle spoke to drivers using computer-generated natural language speech. Using microanalyses of the drivers’ responses to the car’s utterances, we identify a set of topics that are expected and treated as appropriate by the participants in our study, as well as a set of topics and conversational strategies that are treated as inappropriate. We also show that it is just these unexpected, inappropriate utterances that eventually increase users’ trust in the system, make them more at ease, and raise the system’s acceptability as a communication partner.


A Game with a Purpose for Recommender Systems

Smyth, Barry (University College Dublin) | Rafter, Rachael (University College Dublin) | Banks, Sam (University College Dublin)

AAAI Conferences

Recommender systems learn about our preferences to make targeted suggestions. In this paper we outline a novel game-with-a-purpose designed to infer preferences at scale as a side-effect of gameplay. We evaluate the utility of this data in a recommendation context as part of a small live-user trial.


A GWAP Approach for Collecting Qualitative Product Attributes and Perceptual Mapping

Miyashita, Eiji (Aoyama Gakuin University) | Nonaka, Tomomi (Aoyama Gakuin University) | Mizuyama, Hajime (Aoyama Gakuin University)

AAAI Conferences

For a company to survive, it is important to develop new products and services appealing to consumers. Thus, the company must comprehend the preferences of target consumers, for example, by capturing how they perceive related products currently available in the market. Further, the raw data collected by these games are usually in the form of a word or a phrase, rather than the lengthy sentences typical in ordinary questionnaires, and game logs are also available as supplemental data. Thus, it will be easier to convert the raw data …
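As one concrete illustration of turning such word-level raw data into a perceptual map, consider counting how often players attach each attribute word to each product and projecting the resulting matrix to two dimensions. This is our own sketch under stated assumptions: the products, attribute words, and counts are invented, and the centered-SVD projection is just one standard dimensionality-reduction choice, not necessarily the authors' method:

```python
import numpy as np

# Hypothetical counts of how often players attached each attribute
# word to each product (rows: products, columns: attributes).
products = ["cola_A", "cola_B", "tea_C"]
attributes = ["sweet", "fizzy", "calming"]
counts = np.array([[30., 25., 2.],
                   [28., 30., 1.],
                   [3., 1., 40.]])

# Center each attribute column, then project to 2-D with an SVD:
# a common way to lay products out on a perceptual map.
X = counts - counts.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * S[:2]  # 2-D map position of each product

# Products perceived similarly should land near each other
d = lambda i, j: np.linalg.norm(coords[i] - coords[j])
assert d(0, 1) < d(0, 2)  # the two colas sit closer than cola and tea
```

Because each centered column sums to zero, a three-product matrix has rank at most two, so this 2-D map preserves the between-product distances exactly here; with more products the projection becomes an approximation.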